
[GPU] Skip broadcast when input and output shapes are identical #23331

Merged 3 commits into openvinotoolkit:master on Mar 13, 2024

Conversation

e-ddykim
Contributor

@e-ddykim e-ddykim commented Mar 7, 2024

Details:

  • This PR skips some `Broadcast` layers when the input and output shapes are identical.

Tickets:

  • 135100

@e-ddykim e-ddykim requested review from a team as code owners March 7, 2024 12:21
@github-actions github-actions bot added the category: GPU OpenVINO GPU plugin label Mar 7, 2024
@e-ddykim e-ddykim force-pushed the gpu-skip-broadcast branch 2 times, most recently from 3383951 to 7202a71 Compare March 11, 2024 00:49
// In this case, broadcast cannot be optimized due to different input and output shapes.
if (node.have_user_with_type<reorder>() && node.get_users().size() == 1)
return;
node.can_be_optimized(true);
Contributor

Is there any way to further reduce the target scope? I am afraid this will mark most broadcasts as can_be_optimized, leaving less chance for memory reuse, even though the ratio of actually optimizable cases is considered small. For example, could we exclude more of the "apparently not optimizable" cases, like [-1, 1, -1, -1] -> [-1, 16, -1, -1]?

Contributor Author

Oh... I'll try to reduce the target scope as you suggested. Thank you.

Contributor Author

I added an additional condition so that such broadcasts are not skipped. Thank you.

@e-ddykim e-ddykim force-pushed the gpu-skip-broadcast branch from 7202a71 to c963ea2 Compare March 12, 2024 10:18
@yeonbok yeonbok enabled auto-merge March 13, 2024 01:24
@yeonbok yeonbok added this pull request to the merge queue Mar 13, 2024
Merged via the queue into openvinotoolkit:master with commit be0c954 Mar 13, 2024
92 checks passed
praasz pushed a commit to praasz/openvino that referenced this pull request Mar 13, 2024
vishniakov-nikolai pushed a commit to vishniakov-nikolai/openvino that referenced this pull request Mar 13, 2024
alvoron pushed a commit to alvoron/openvino that referenced this pull request Apr 29, 2024
4 participants